Texture Synthesis with Grouplets
Authors
Abstract
Similar resources
Texture Synthesis with Prioritized Pixel Re-synthesis
In this paper, we propose a new patch-based texture synthesis method. The core of the proposed method consists of two main components: (1) a feature-weighted similarity measurement to search for the best match and (2) a dynamic, priority-based pixel re-synthesis to reduce discontinuity at the boundary of adjacent patches. Examples and experimental comparisons with other previous methods a...
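A minimal sketch of the first component, a feature-weighted best-match search, follows; the array shapes, the weighting mask, and the helper name best_match are illustrative assumptions rather than the paper's formulation.

import numpy as np

def best_match(target, candidates, weights):
    """target: (h, w, c) patch; candidates: (n, h, w, c) patch stack;
    weights: (h, w) mask emphasizing salient features when scoring similarity."""
    diff = (candidates - target[None]) ** 2              # per-pixel squared error
    scores = (diff.sum(axis=-1) * weights[None]).sum(axis=(1, 2))
    return int(np.argmin(scores))                        # index of the closest patch

# Usage: with a uniform weight mask this reduces to plain SSD patch matching.
rng = np.random.default_rng(1)
cands = rng.random((100, 16, 16, 3))
print(best_match(cands[42], cands, np.ones((16, 16))))   # -> 42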
Texture Synthesis with Spatial Generative Adversarial Networks
Generative adversarial networks (GANs) [7] are a recent approach to train generative models of data, which have been shown to work particularly well on image data. In the current paper we introduce a new model for texture synthesis based on GAN learning. By extending the input noise distribution space from a single vector to a whole spatial tensor, we create an architecture with properties well...
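A minimal sketch of the spatial-noise idea in Python (PyTorch): because the generator is fully convolutional, feeding a larger noise tensor at test time yields a proportionally larger texture. Layer sizes and channel counts are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class SpatialGenerator(nn.Module):
    """Maps a spatial noise tensor (B, C, H, W) to an RGB texture (B, 3, 4H, 4W)."""
    def __init__(self, z_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_channels, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Enlarging the input noise grid enlarges the output texture without retraining.
g = SpatialGenerator()
texture = g(torch.randn(1, 64, 16, 16))   # -> shape (1, 3, 64, 64)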
Synthesis of Cyclic Motions with Texture
Motion capture data is useful to an animator because it captures the exact style of a particular individual’s movements and has a life-like quality. However, often the data is not exactly what the animator needs. We demonstrate a method for using motion capture data as a starting point for creating synthetic motion data that addresses this problem by allowing the animator to specify hard constr...
Audio Texture Synthesis with Scattering Moments
We introduce an audio texture synthesis algorithm based on scattering moments. A scattering transform is computed by iteratively decomposing a signal with complex wavelet filter banks and computing their amplitude envelopes. Scattering moments provide general representations of stationary processes computed as expected values of scattering coefficients. They are estimated with low-variance estima...
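A minimal sketch of first- and second-order scattering moments in Python (NumPy), using a crude Gaussian band-pass bank in place of proper Morlet wavelets; all names and parameters here are assumptions made for illustration.

import numpy as np

def filter_bank(n, num_scales=8, xi=0.4):
    """Complex band-pass filters in the Fourier domain, one per dyadic scale."""
    freqs = np.fft.fftfreq(n)
    bank = []
    for j in range(num_scales):
        center = xi / 2 ** j              # center frequency halves at each scale
        sigma = center / 4.0              # bandwidth proportional to the center
        bank.append(np.exp(-((freqs - center) ** 2) / (2 * sigma ** 2)))
    return np.stack(bank)                 # shape (num_scales, n)

def wavelet_modulus(x, bank):
    """Band-pass filter x and keep the amplitude envelope of each sub-band."""
    X = np.fft.fft(x)
    return np.abs(np.fft.ifft(X[None, :] * bank, axis=1))

def scattering_moments(x, num_scales=8):
    bank = filter_bank(len(x), num_scales)
    u1 = wavelet_modulus(x, bank)                  # first-layer envelopes
    s1 = u1.mean(axis=1)                           # first-order moments (time averages)
    s2 = np.stack([wavelet_modulus(u, bank).mean(axis=1) for u in u1])
    return s1, s2                                  # second order: cascade once more

# Usage: moments of a short noise signal standing in for an audio texture.
s1, s2 = scattering_moments(np.random.default_rng(0).standard_normal(4096))
print(s1.shape, s2.shape)                          # -> (8,) (8, 8)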
Texture Synthesis with Recurrent Variational Auto-Encoder
We propose a recurrent variational auto-encoder for texture synthesis. A novel loss function, FLTBNK, is used for training the texture synthesizer. It is a rotationally and partially color-invariant loss function. Unlike the L2 loss, FLTBNK explicitly models the correlation of color intensity between pixels. Our texture synthesizer generates neighboring tiles to expand a sample texture and is evaluat...
Journal
Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 2010
ISSN: 0162-8828
DOI: 10.1109/tpami.2009.54